An Accelerated Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness
This paper investigates a class of stochastic bilevel optimization problems where the upper-level function is nonconvex with potentially unbounded smoothness and the lower-level problem is strongly convex. These problems have significant applications in sequential data learning, such as text classification using recurrent neural networks. The unbounded smoothness is characterized by the smoothness constant of the upper-level function scaling linearly with the gradient norm, with no uniform upper bound. Existing state-of-the-art algorithms require \widetilde{O}(\epsilon^{-4}) oracle calls to a stochastic gradient or Hessian/Jacobian-vector product to find an \epsilon-stationary point. However, it remains unclear whether the convergence rate can be further improved when the assumptions imposed at the population level also hold for each random realization almost surely (e.g., Lipschitzness of each realization of the stochastic gradient).
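To make the setting concrete, here is a minimal Python sketch of a single-loop stochastic bilevel update on a toy problem with a quadratic (hence strongly convex) lower level. The objective, step sizes, and the normalized update are illustrative choices of ours, not the paper's algorithm; the normalization of the hypergradient step is the standard device used under relaxed smoothness, where the smoothness constant grows with the gradient norm.

```python
import numpy as np

# Illustrative single-loop bilevel update on a toy problem (our sketch, not
# the paper's algorithm). Lower level: g(x, y) = 0.5 * ||y - A @ x||^2, which
# is strongly convex in y with minimizer y*(x) = A @ x.
rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d)) / np.sqrt(d)

# Hypothetical upper-level objective f(x, y) = sum(log(cosh(x))) * sum(y^2).
def grad_f_x(x, y):
    return np.tanh(x) * np.sum(y ** 2)

def grad_f_y(x, y):
    return 2.0 * y * np.sum(np.log(np.cosh(x)))

x = rng.normal(size=d)
y = rng.normal(size=d)
alpha, beta = 0.05, 0.5  # upper-/lower-level step sizes (arbitrary)
for _ in range(200):
    # One SGD-style step on the lower level: grad_y g = y - A @ x.
    y = y - beta * (y - A @ x)
    # Hypergradient: grad_x f - grad_xy g @ inv(grad_yy g) @ grad_y f.
    # Here grad_yy g = I and grad_xy g = -A.T, so the correction is
    # +A.T @ grad_f_y.
    h = grad_f_x(x, y) + A.T @ grad_f_y(x, y)
    # Normalized step: the usual remedy when smoothness scales with ||grad||.
    x = x - alpha * h / (1.0 + np.linalg.norm(h))
```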
A Single-Loop Accelerated Extra-Gradient Difference Algorithm with Improved Complexity Bounds for Constrained Minimax Optimization
In this paper, we propose a novel extra-gradient difference acceleration algorithm for solving constrained nonconvex-nonconcave (NC-NC) minimax problems. In particular, we design a new extra-gradient difference step to obtain an important quasi-cocoercivity property, which plays a key role in significantly improving the convergence rate in the constrained NC-NC setting without additional structural assumptions. Momentum acceleration is also introduced into our dual accelerating update step. For the special cases of the constrained NC-NC setting, our algorithm attains the same complexity \mathcal{O}(\epsilon^{-2}) for both the nonconvex-concave (NC-C) and convex-nonconcave (C-NC) cases, whereas the best-known complexity bounds are \widetilde{\mathcal{O}}(\epsilon^{-2.5}) for the NC-C case and \widetilde{\mathcal{O}}(\epsilon^{-4}) for the C-NC case. For a fair comparison with existing algorithms, we also analyze the complexity bound for finding an \epsilon-stationary point of the primal function \phi for the constrained NC-C problem, which shows that our algorithm improves the complexity bound from \widetilde{\mathcal{O}}(\epsilon^{-3}) to \mathcal{O}(\epsilon^{-2}).
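For reference on the extrapolation mechanics involved, the sketch below implements the classical projected extra-gradient method on a toy constrained minimax problem; the objective, constraint sets, and step size are our assumptions, and the paper's extra-gradient difference step and momentum acceleration are not reproduced here.

```python
import numpy as np

# Classical projected extra-gradient on a toy constrained minimax problem
# min_x max_y f(x, y) (baseline sketch, NOT the paper's algorithm).
# Toy objective: f(x, y) = x @ B @ y + 0.1*||x||^2 - 0.1*||y||^2,
# with both x and y constrained to the unit Euclidean ball.
rng = np.random.default_rng(1)
d = 4
B = rng.normal(size=(d, d)) / np.sqrt(d)

def proj_ball(z, r=1.0):  # Euclidean projection onto {z : ||z|| <= r}
    n = np.linalg.norm(z)
    return z if n <= r else z * (r / n)

def grads(x, y):  # (grad_x f, grad_y f)
    return B @ y + 0.2 * x, B.T @ x - 0.2 * y

x, y, eta = rng.normal(size=d), rng.normal(size=d), 0.2
for _ in range(300):
    gx, gy = grads(x, y)
    x_half = proj_ball(x - eta * gx)  # extrapolation (look-ahead) step
    y_half = proj_ball(y + eta * gy)
    gx, gy = grads(x_half, y_half)    # gradients at the look-ahead point
    x = proj_ball(x - eta * gx)       # update step uses look-ahead gradients
    y = proj_ball(y + eta * gy)
```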
Oracle Complexity in Nonsmooth Nonconvex Optimization
It is well known that, given a smooth, bounded-from-below, and possibly nonconvex function, standard gradient-based methods can find \epsilon-stationary points (with gradient norm less than \epsilon) in \mathcal{O}(1/\epsilon^2) iterations. However, many important nonconvex optimization problems, such as those associated with training modern neural networks, are inherently not smooth, making these results inapplicable. In this paper, we study nonsmooth nonconvex optimization from an oracle complexity viewpoint, where the algorithm is assumed to be given access only to local information about the function at various points. We provide two main results (under mild assumptions): First, we consider the problem of getting \emph{near} \epsilon-stationary points. This is perhaps the most natural relaxation of \emph{finding} \epsilon-stationary points, which is impossible in the nonsmooth nonconvex case.
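As a quick sanity check of the smooth baseline quoted in the first sentence, the following sketch runs gradient descent on a 1-smooth nonconvex function until the gradient norm drops below \epsilon; the test function, step size, and starting point are our choices, not the paper's.

```python
import numpy as np

# Numerical check of the smooth baseline: gradient descent finds an
# eps-stationary point of an L-smooth function in O(1/eps^2) iterations.
# Test function f(x) = sum(sin(x)) is our choice (nonconvex, 1-smooth,
# bounded from below by -d since each coordinate is bounded by -1).
def grad_f(x):
    return np.cos(x)  # |d/dx cos| <= 1, so f is 1-smooth (L = 1)

L, eps = 1.0, 1e-2
x = np.zeros(10)  # f(x0) = 0, inf f = -10
t = 0
while np.linalg.norm(grad_f(x)) >= eps:
    x = x - (1.0 / L) * grad_f(x)  # standard step size 1/L
    t += 1
# The descent lemma guarantees t <= 2*L*(f(x0) - inf f)/eps**2 = 2e5 here;
# the actual iteration count is far smaller on this easy instance.
print(t, np.linalg.norm(grad_f(x)))
```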